Error Bound and Convergence Analysis of Matrix Splitting Algorithms for the Affine Variational Inequality Problem
Authors
Abstract
Consider the affine variational inequality problem. It is shown that the distance to the solution set from a feasible point near the solution set can be bounded by the norm of a natural residual at that point. This bound is then used to prove linear convergence of a matrix splitting algorithm for solving the symmetric case of the problem. This latter result improves upon a recent result of Luo and Tseng that further assumes the problem to be monotone.
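To make the abstract's two main ingredients concrete, the sketch below computes the natural residual r(x) = x − Π_X(x − (Mx + q)) and runs a standard matrix splitting iteration for an affine variational inequality. It is only an illustrative sketch under added assumptions, not the paper's algorithm: the feasible set X is taken to be a box (so the projection is componentwise clipping), the splitting M = B + C uses the lower-triangular part of M for B (a projected Gauss-Seidel scheme), and the function names are hypothetical.

```python
import numpy as np

# Illustrative sketch only: an affine variational inequality AVI(X, M, q) over a
# box X = {x : lo <= x <= hi}, with the splitting M = B + C where B is the
# lower-triangular part of M (diagonal included).  This yields a projected
# Gauss-Seidel-type matrix splitting iteration; it is not the paper's exact method.

def natural_residual(x, M, q, lo, hi):
    """r(x) = x - proj_X(x - (Mx + q)); x solves the AVI iff r(x) = 0."""
    return x - np.clip(x - (M @ x + q), lo, hi)

def splitting_iteration(M, q, lo, hi, x0, max_iter=500, tol=1e-10):
    """Each outer pass solves AVI(X, B, q + C x_k) by forward substitution with clipping."""
    n = len(q)
    x = x0.astype(float).copy()
    for k in range(max_iter):
        x_prev = x.copy()
        for i in range(n):
            # One-dimensional subproblem: latest values for j < i, previous for j > i.
            s = q[i] + M[i, :i] @ x[:i] + M[i, i + 1:] @ x_prev[i + 1:]
            x[i] = np.clip(-s / M[i, i], lo[i], hi[i])
        if np.linalg.norm(natural_residual(x, M, q, lo, hi)) <= tol:
            return x, k + 1
    return x, max_iter

# Example with a symmetric positive definite M, i.e., the symmetric case for
# which the abstract asserts linear convergence.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
M = A @ A.T + 5.0 * np.eye(5)
q = rng.standard_normal(5)
lo, hi = np.zeros(5), np.full(5, 2.0)
x_sol, iters = splitting_iteration(M, q, lo, hi, x0=np.zeros(5))
print(iters, np.linalg.norm(natural_residual(x_sol, M, q, lo, hi)))
```

In this setting the printed residual norm is the quantity that the paper's error bound relates to the distance from the iterate to the solution set.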
Related Articles
Convergence rate analysis of iterative algorithms for solving variational inequality problems
We present a unified convergence rate analysis of iterative methods for solving the variational inequality problem. Our results are based on certain error bounds; they subsume and extend the linear and sublinear rates of convergence established in several previous studies. We also derive a new error bound for γ-strictly monotone variational inequalities. The class of algorithms covered by our ...
Error Bound and Reduced-Gradient Projection Algorithms for Convex Minimization over a Polyhedral Set
Consider the problem of minimizing, over a polyhedral set, the composition of an affine mapping with a strongly convex differentiable function. The polyhedral set is expressed as the intersection of an affine set with a (simpler) polyhedral set and a new local error bound for this problem, based on projecting the reduced gradient associated with the affine set onto the simpler polyhedral set, ...
Further Applications of a Splitting Algorithm to Decomposition in Variational Inequalities and Convex Programming
A classical method for solving the variational inequality problem is the projection algorithm. We show that existing convergence results for this algorithm follow from one given by Gabay for a splitting algorithm for finding a zero of the sum of two maximal monotone operators. Moreover, we extend the projection algorithm to solve any monotone affine variational inequality problem. When applied ...
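The projection algorithm mentioned in this snippet can be written down in a few lines. The sketch below is a generic illustration over a box-shaped feasible set, not the algorithm analyzed in that paper; the step-size rule (which requires M to have a positive definite symmetric part) and the name `projection_method` are assumptions added here.

```python
import numpy as np

# Illustrative sketch of the classical projection iteration referred to above,
#     x_{k+1} = proj_X(x_k - gamma * (M x_k + q)),
# for an affine variational inequality over a box X = {x : lo <= x <= hi}.
# The step size below assumes M has a positive definite symmetric part
# (strong monotonicity); that assumption, and this particular rule, are only
# one standard sufficient condition for convergence.

def projection_method(M, q, lo, hi, x0, max_iter=5000, tol=1e-10):
    mu = np.linalg.eigvalsh((M + M.T) / 2.0).min()  # strong-monotonicity modulus
    L = np.linalg.norm(M, 2)                        # Lipschitz constant of x -> Mx + q
    gamma = mu / L**2                               # any gamma in (0, 2*mu/L**2) works
    x = x0.astype(float).copy()
    for k in range(max_iter):
        x_next = np.clip(x - gamma * (M @ x + q), lo, hi)
        if np.linalg.norm(x_next - x) <= tol:
            return x_next, k + 1
        x = x_next
    return x, max_iter
```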
Sequential Optimality Conditions and Variational Inequalities
In recent years, sequential optimality conditions have frequently been used to establish convergence of iterative methods for solving nonlinear constrained optimization problems. Sequential optimality conditions do not require any constraint qualifications. In this paper, we present the necessary sequential complementary approximate Karush-Kuhn-Tucker (CAKKT) condition for a point to be a solution of a ...
Strong convergence of variational inequality problem over the set of common fixed points of a family of demi-contractive mappings
In this paper, by using the viscosity iterative method and the hybrid steepest-descent method, we present a new algorithm for solving the variational inequality problem. The sequence generated by this algorithm converges strongly to a common element of the set of common zero points of a finite family of inverse strongly monotone operators and the set of common fixed points of a finite family...
Journal: SIAM Journal on Optimization
Volume: 2, Issue: -
Pages: -
Publication date: 1992